As a critical component of online advertising and marketing, click-through rate (CTR) prediction has attracted much attention from both industry and academia. Recently, deep learning has become the mainstream methodology for CTR prediction. Despite sustained efforts, existing approaches still face several challenges. On the one hand, high-order interactions between features remain under-explored. On the other hand, high-order interactions may neglect the semantic information of low-order fields. In this paper, we propose a novel prediction method named FINT, which employs a field-aware interaction layer that captures high-order feature interactions while retaining low-order field information. To empirically investigate the effectiveness and robustness of FINT, we perform extensive experiments on three realistic databases: KDD2012, Criteo, and Avazu. The obtained results demonstrate that FINT can significantly improve performance compared to existing methods, without increasing the required amount of computation. Moreover, the proposed method increased the advertising revenue of a large-scale online video app by about 2.72% through A/B testing. To better facilitate research in the CTR field, we release our code along with a reference implementation at: https://github.com/zhishan01/fint.
translated by Google Translate
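The field-aware interaction idea can be pictured with a minimal numpy sketch. This is illustrative only: the residual form, the per-layer weight shapes, and all names here are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def field_aware_interaction(V0, weights_per_layer):
    """Illustrative field-aware interaction layer (not the paper's exact
    formulation): at each layer, every field vector is multiplied
    element-wise by a learned combination of the original field vectors,
    raising the interaction order by one, while a residual term keeps
    the low-order field information around."""
    V = V0
    for W in weights_per_layer:      # W: (num_fields, num_fields)
        V = V * (W @ V0) + V         # Hadamard interaction + residual
    return V

rng = np.random.default_rng(0)
num_fields, dim = 3, 4
V0 = rng.normal(size=(num_fields, dim))          # one embedding per field
layers = [np.eye(num_fields) for _ in range(2)]  # toy (untrained) weights
out = field_aware_interaction(V0, layers)
print(out.shape)  # (3, 4) -- still one vector per field
```

Note how the output keeps one vector per field, which is how a design like this can expose high-order interactions without discarding the low-order field semantics.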
Motion planning and control in autonomous car racing are among the most challenging and safety-critical tasks due to high speed and dynamism. The lower-level control nodes are expected to be highly optimized due to the resource constraints of onboard embedded processing units, while still meeting strict latency requirements. Some of these guarantees can be provided at the application level, such as using ROS2's Real-Time executors. However, the performance can be far from satisfactory, as many modern control algorithms (such as Model Predictive Control) rely on solving complicated online optimization problems at each iteration. In this paper, we present a simple yet effective multi-threading technique to optimize the throughput of online-control algorithms for resource-constrained autonomous racing platforms. We achieve this by maintaining a systematic pool of worker threads that solve the optimization problem in parallel, which can improve system performance by reducing the latency between control input commands. We further demonstrate the effectiveness of our method using the Model Predictive Contouring Control (MPCC) algorithm running on Nvidia's Xavier AGX platform.
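The worker-pool idea can be sketched as follows. This is a minimal Python stand-in, not the paper's implementation (which targets ROS2/C++ on the Xavier AGX); `solve_mpc` is a hypothetical placeholder for one expensive online optimization step such as an MPCC iteration.

```python
import threading
import time

# Hypothetical placeholder for one expensive online optimization
# (e.g. an MPCC iteration); the real solver is problem-specific.
def solve_mpc(state):
    time.sleep(0.01)                  # simulate solver latency
    return [-0.5 * x for x in state]  # toy control input

class SolverPool:
    """Pool of worker threads that keep re-solving the newest state,
    so the control loop never blocks on the solver: it always reads
    the freshest available control input, shortening the latency
    between consecutive control commands."""

    def __init__(self, n_workers=2):
        self._state = None
        self._latest = None
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._workers = [threading.Thread(target=self._work, daemon=True)
                         for _ in range(n_workers)]
        for w in self._workers:
            w.start()

    def submit_state(self, state):          # called by the sensor loop
        with self._lock:
            self._state = state

    def latest_control(self):               # called by the control loop
        with self._lock:
            return self._latest

    def _work(self):
        while not self._stop.is_set():
            with self._lock:
                state = self._state
            if state is None:
                time.sleep(0.001)
                continue
            u = solve_mpc(state)            # solve outside the lock
            with self._lock:
                self._latest = u

    def shutdown(self):
        self._stop.set()
        for w in self._workers:
            w.join()

pool = SolverPool(n_workers=2)
pool.submit_state([1.0, -2.0])
time.sleep(0.2)                 # allow at least one solve to finish
u = pool.latest_control()
pool.shutdown()
print(u)  # [-0.5, 1.0]
```

The key design choice is that the consumer never waits on an in-flight solve: a stale-but-recent control input is returned immediately, which is what reduces inter-command latency on a resource-constrained platform.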
Real-time machine learning detection algorithms are often found in autonomous vehicle technology and rely on quality datasets. These algorithms should work correctly in everyday conditions as well as under strong sunlight. Reports indicate that glare is one of the two most prominent causes of crashes. However, existing datasets, such as LISA and the German Traffic Sign Recognition Benchmark, do not reflect the presence of sun glare at all. This paper presents the GLARE traffic sign dataset: a collection of images with US-based traffic signs under heavy visual interference from sunlight. GLARE contains 2,157 images of traffic signs with sun glare, pulled from 33 US road videos. It provides an essential enrichment to the widely used LISA traffic sign dataset. Our experimental study shows that although several state-of-the-art baseline methods perform well when trained and tested on traffic sign datasets without sun glare, they suffer greatly when tested on GLARE (e.g., 9% to 21% mean mAP, which is significantly lower than their performance on the LISA dataset). We also note that current architectures achieve better detection accuracy (e.g., an average 42% mean mAP gain for mainstream algorithms) when trained on traffic sign images captured in sun glare.
Time series models often have to deal with extreme events and anomalies, both of which are prevalent in real-world datasets. Such models often need to provide careful probabilistic forecasts, which are vital for risk management of extreme events such as hurricanes and pandemics. However, automatically detecting and learning to use extreme events and anomalies in large-scale datasets is challenging and typically requires manual effort. Hence, we propose an anomaly-aware forecasting framework that leverages previously seen anomalies to improve its prediction accuracy during and after the presence of extreme events. Specifically, the framework automatically extracts anomalies and incorporates them through an attention mechanism to increase its accuracy on future extreme events. Moreover, the framework employs a dynamic uncertainty optimization algorithm that reduces the uncertainty of its forecasts in an online manner. The proposed framework demonstrates consistently superior accuracy with less uncertainty on three datasets with different varieties of anomalies, compared to current prediction models.
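A minimal sketch of the two ingredients above, automatic anomaly extraction and attention over previously seen anomalies, might look like this. It is a toy numpy illustration: the z-score rule, the similarity scores, and the 50/50 blend are assumptions for exposition, not the framework's actual components.

```python
import numpy as np

def extract_anomalies(series, z_thresh=3.0):
    """Flag points whose z-score exceeds a threshold -- one simple
    stand-in for automatic anomaly extraction."""
    mu, sigma = series.mean(), series.std()
    return np.where(np.abs(series - mu) > z_thresh * sigma)[0]

def attention_forecast(query, anomaly_values, base_forecast):
    """Blend a base forecast with previously seen anomaly values via
    softmax attention; weights favour anomalies similar to the query."""
    if len(anomaly_values) == 0:
        return base_forecast
    scores = -np.abs(anomaly_values - query)          # similarity
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax
    return 0.5 * base_forecast + 0.5 * float(weights @ anomaly_values)

series = np.array([1.0, 1.1, 0.9, 1.0, 9.0, 1.05, 1.0, 0.95])
idx = extract_anomalies(series, z_thresh=2.0)   # flags the spike at 9.0
# When a new extreme value appears, attention pulls the forecast
# toward behaviour seen during past anomalies:
f = attention_forecast(9.0, series[idx], base_forecast=1.0)
print(idx, f)  # [4] 5.0
```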
Transformers have been widely applied to text classification. Unfortunately, real-world data contain anomalies and noisy labels, which pose challenges for state-of-the-art Transformers. This paper proposes Protoformer, a novel self-learning framework for Transformers that can leverage problematic samples for text classification. Protoformer features a selection mechanism for embedding samples that allows us to efficiently extract and utilize anomaly prototypes and difficult class prototypes. We demonstrate such capabilities on datasets with diverse textual structures (e.g., Twitter, IMDB, ArXiv). We also apply the framework to several models. The results indicate that Protoformer can improve current Transformers in various empirical settings.
Deep learning based models have dominated the current landscape of production recommender systems. Furthermore, recent years have witnessed an exponential growth of model scale -- from Google's 2016 model with 1 billion parameters to the latest Facebook model with 12 trillion parameters. Every jump in model capacity has brought significant quality enhancements, which makes us believe the era of 100 trillion parameters is around the corner. However, the training of such models is challenging even within industrial-scale data centers. This difficulty is inherited from the staggering heterogeneity of the training computation -- the model's embedding layer can account for more than 99.99% of the total model size and is extremely memory-intensive, while the rest of the neural network is increasingly computation-intensive. To support the training of such huge models, an efficient distributed training system is in urgent need. In this paper, we resolve this challenge by carefully co-designing both the optimization algorithm and the distributed system architecture. Specifically, in order to ensure both training efficiency and training accuracy, we design a novel hybrid training algorithm, where the embedding layer and the dense neural network are handled by different synchronization mechanisms; we then build a system called Persia (short for parallel recommendation training system with hybrid acceleration) to support this hybrid training algorithm. Both theoretical demonstrations and empirical studies up to 100 trillion parameters have been conducted to justify the system design and implementation of Persia. We make Persia publicly available (at https://github.com/persiamml/persia) so that anyone can easily train a recommender model at the scale of 100 trillion parameters.
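The hybrid idea, asynchronous updates for the memory-bound embedding layer and synchronous gradient averaging for the compute-bound dense network, can be sketched as a toy numpy simulation. Function names and shapes here are invented for illustration and are not Persia's API.

```python
import numpy as np

def sync_dense_step(dense_params, worker_grads, lr=0.1):
    """Synchronous update for the compute-bound dense network:
    all-reduce (average) the workers' gradients, then apply."""
    avg_grad = np.mean(worker_grads, axis=0)
    return dense_params - lr * avg_grad

def async_embedding_step(embedding_table, row, grad, lr=0.1):
    """Asynchronous update for the memory-bound embedding layer:
    each worker writes its own rows without waiting for the others,
    possibly acting on slightly stale values."""
    embedding_table[row] -= lr * grad
    return embedding_table

dense = np.array([1.0, 1.0])
grads = [np.array([0.2, 0.0]), np.array([0.0, 0.2])]
dense = sync_dense_step(dense, grads)                 # averaged update

emb = np.zeros((4, 2))                                # 4 ids, dim 2
emb = async_embedding_step(emb, row=1, grad=np.array([1.0, -1.0]))
print(dense, emb[1])  # [0.99 0.99] [-0.1  0.1]
```

The split matters because only a tiny, sparse subset of embedding rows is touched per step (so stale updates are mostly harmless), while the dense network is updated on every step and benefits from exact gradient averaging.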
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and higher consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSFormer is adapted to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In such scenes, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.